The task is to extract images or signals accurately, and even exactly, from a number of samples which is far smaller than the desired resolution of the image/signal, e.g., the number of pixels in the image. This technique draws from results in several fields, including linear algebra, linear programming, and probability.
Suppose we are given a sparse signal.
Can we recover the signal from a small number of measurements (far smaller than the desired resolution of the signal)?
The answer is _YES, for some signals and carefully selected measurements, using $l_1$ minimization._
The reader should be familiar with elementary concepts about signals, linear algebra, and linear programming.
The reader should be able to recover a signal from a small number of measurements.
For more details see
Credits: Daniel Bragg, an IAESTE intern, performed testing of some of the methods.
Let $A\in\mathbb{R}^{m\times n}$ with $m<n$, $x\in\mathbb{R}^n$ and $b\in\mathbb{R}^m$.
The system $Ax=b$ is underdetermined.
$\|x\|_0$ is the number of nonzero entries of $x$ (a quasi-norm).
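For example, a minimal illustration in Julia (using only Base functions):

```julia
# ‖x‖₀ counts the nonzero entries; it is only a quasi-norm, since
# ‖αx‖₀ = ‖x‖₀ for every α ≠ 0, violating homogeneity
x = [0.0, 3.0, 0.0, -1.0]
count(!iszero, x)    # returns 2
```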
A matrix $A$ satisfies the restricted isometry property (RIP) of order $k$ with constant $\delta_k\in(0,1)$ if
$$ (1-\delta_k)\|x\|_2^2 \leq \|Ax\|_2^2 \leq (1+\delta_k)\|x\|_2^2 $$for any $x$ such that $\|x\|_0 \leq k$.
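The constant $\delta_k$ cannot be easily certified, but a rough empirical lower bound can be obtained by sampling random $k$-sparse vectors. A minimal sketch (the function name estimate_δ and the keyword trials are ours):

```julia
using LinearAlgebra, Random
# Monte-Carlo lower bound on δ_k: record the largest observed deviation
# of ‖Ax‖₂²/‖x‖₂² from 1 over random k-sparse vectors x.
# This only bounds δ_k from below; it is not a certificate.
function estimate_δ(A::AbstractMatrix, k::Int; trials=10_000)
    n=size(A,2)
    δ=0.0
    for _=1:trials
        x=zeros(n)
        x[randperm(n)[1:k]]=randn(k)
        δ=max(δ, abs(norm(A*x)^2/norm(x)^2-1))
    end
    δ
end
```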
The mutual incoherence of a matrix $A$ is
$$ \mathcal{M}(A)= \max_{i \neq j} |[A^TA]_{ij}|, $$that is, the maximal absolute inner product of distinct columns of $A$. If the columns of $A$ have unit norms, $\mathcal{M}(A)\in[0,1]$.
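For example, the mutual incoherence of a small random matrix with normalised columns is computed directly from the definition:

```julia
using LinearAlgebra
A=rand(5,8)
# Normalise the columns, so the diagonal of A'*A consists of ones
for i=1:size(A,2)
    normalize!(view(A,:,i))
end
# The off-diagonal entries of A'*A are inner products of distinct columns
M=maximum(abs,A'*A-I)
```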
The spark of a given matrix $A$, $spark(A)$, is the smallest number of columns of $A$ that are linearly dependent.
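The spark can only be computed by inspecting subsets of columns, which is feasible just for tiny matrices. A brute-force sketch, assuming the package Combinatorics.jl is installed:

```julia
using LinearAlgebra, Combinatorics
# spark(A) is the smallest k such that some k columns are linearly dependent.
# The cost grows exponentially with n, so this is for illustration only.
function spark(A::AbstractMatrix)
    n=size(A,2)
    for k=1:n, S in combinations(1:n,k)
        rank(A[:,S])<k && return k
    end
    n+1    # all columns are linearly independent (the usual convention)
end
spark([1.0 0 1; 0 1 1])    # returns 3
```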
An underdetermined system either has no solution or has infinitely many solutions.
The typical problem is to choose a solution of minimal norm. This can be reformulated as a constrained optimization problem $$ \textrm{minimize}\ \|x\| \quad \textrm{subject to} \quad Ax=b. $$ In particular:
For any vector $b$, there exists at most one vector $x$ such that $\|x\|_0\leq k$ and $Ax=b$ if and only if $spark(A) > 2k$. This implies that $m\geq 2k$ measurements are required, which is a good choice when the solutions are exactly sparse.
If $k<\displaystyle \frac{1}{2} \left(1+\frac{1}{\mathcal{M}(A)}\right)$, then for any vector $b$ there exists at most one vector $x$ such that $\|x\|_0\leq k$ and $Ax=b$.
If the solution $x$ of the $l_0$ problem satisfies $\|x\|_0 < \displaystyle \frac{\sqrt{2}-1/2}{\mathcal{M}(A)}$, then the solution of the $l_1$ problem is the solution of the $l_0$ problem!
If $A$ has unit-norm columns, then $A$ satisfies the RIP of order $k$ with $\delta_k = (k-1)\mathcal{M}(A)$ for all $k < 1/\mathcal{M}(A)$.
If $A$ satisfies the RIP of order $2k$ with $\delta_{2k}<\sqrt{2}-1$, then the solution of the $l_1$ problem is the solution of the $l_0$ problem!
Checking whether a specific matrix satisfies the RIP is difficult. However, if $m \geq C\, k \log\left(\displaystyle\frac{n}{k}\right)$, where $C$ is a constant depending on the class of matrices, the following classes of matrices satisfy the RIP with $\delta_{2k}<\sqrt{2}-1$ with overwhelming probability (the matrices are normalised to have columns with unit norms):
* matrices whose columns are random vectors uniformly distributed on the unit sphere,
* matrices whose entries are independent samples from the normal distribution $\mathcal{N}\left(0,\frac{1}{m}\right)$,
* matrices whose entries are independent samples from the symmetric Bernoulli distribution (entries $\pm\frac{1}{\sqrt{m}}$ with equal probability), and
* matrices consisting of $m$ randomly chosen rows of the $n\times n$ Fourier matrix.
The compressive sensing interpretation is the following: the signal $x$ is sampled by applying $m$ linear functionals (the rows of $A$) and then reconstructed from those $m$ samples.
In [1]:
using Random
Random.seed!(678)
m=5
n=8
A=rand(m,n)
b=rand(m)
A
Out[1]:
In [2]:
b
Out[2]:
In [3]:
using LinearAlgebra
x=A\b
U,σ,V=svd(A)
norm(A*x-b), norm( sum( [(U[:,k]'*b/σ[k])[1]*V[:,k] for k=1:m])-x)
Out[3]:
We recover randomly generated sparse signals "measured" with the rows of the matrix $A$. The experiment is performed for the types of matrices listed in the last fact above.
The $l_1$ minimization problem is solved using the function linprog() from the package MathProgBase.jl. This function requires a linear programming solver, such as the one from the package Clp.jl, to be installed beforehand (the compilation takes a while). MathProgBase.jl is now deprecated in favour of MathOptInterface.jl, but we keep it in this lecture for the sake of simplicity. JuMP.jl is also a good choice.
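For reference, here is a sketch of the same nonnegative $l_1$ recovery problem written with JuMP.jl (assuming JuMP.jl and Clp.jl are installed; the function name recover_l1 is ours, and this path is not used in the rest of the lecture):

```julia
using JuMP, Clp
# minimize sum(x) subject to A*x == b and x ≥ 0,
# i.e. the l₁ problem for nonnegative x, as solved by linprog() below
function recover_l1(A::Matrix, b::Vector)
    n=size(A,2)
    model=Model(Clp.Optimizer)
    set_silent(model)
    @variable(model, x[1:n] >= 0)
    @constraint(model, A*x .== b)
    @objective(model, Min, sum(x))
    optimize!(model)
    value.(x)
end
```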
Random matrices are generated using the package Distributions.jl.
For more details see the documentation.
In [4]:
# import Pkg; Pkg.add("Clp")
# import Pkg; Pkg.add("GLPKMathProgInterface")
# import Pkg; Pkg.add("MathProgBase")
# import Pkg; Pkg.add("Distributions")
In [5]:
using Plots
using Clp
using GLPKMathProgInterface
# using Gurobi
using MathProgBase
using Distributions
In [6]:
varinfo(MathProgBase)
Out[6]:
In [7]:
l1 = linprog([-1,0],[2 1],'<',1.5,ClpSolver())
Out[7]:
In [8]:
fieldnames(typeof(l1))
Out[8]:
In [9]:
l1.attrs
Out[9]:
In [10]:
# Random vectors on a unit sphere
using Random
Random.seed!(421)
n=100
m=40
k=15
A=svd(rand(m,n)).Vt
using SparseArrays
# Compute a random vector
x=sprand(n,k/n)
Out[10]:
In [11]:
A
Out[11]:
In [12]:
for i=1:size(A,2)
normalize!(view(A,:,i))
end
In [13]:
diag(A'*A)
Out[13]:
In [14]:
# Check incoherence
μ=maximum(abs,A'*A-I)
Out[14]:
In [15]:
# Sampling
b=A*x
Out[15]:
In [16]:
# Recovery
c=ones(n)
l1=linprog(c,A,'=',b,0,Inf,ClpSolver())
Out[16]:
In [17]:
# Check
scatter(x)
Out[17]:
In [18]:
x
Out[18]:
In [19]:
scatter!(l1.sol)
Out[19]:
In [20]:
sparse(l1.sol)
Out[20]:
In [22]:
# for n=50:50:500, m=20:10:200, k=10:10:100
# m should not be too small
n=200
m=40
k=15
A=svd(rand(m,n)).Vt
for i=1:size(A,2)
normalize!(view(A,:,i))
end
# Compute a random vector
x=sprand(n,k/n)
# Sampling
b=A*x
# Recovery
l1=linprog(ones(n),A,'=',b,0.0,Inf,ClpSolver())
scatter([x l1.sol])
Out[22]:
In [23]:
# Normal distribution
# for n=50:50:500, m=20:10:200, k=10:10:100
n=200
m=50
k=10
A=rand(Normal(0,1/m),m,n)
for i=1:size(A,2)
normalize!(view(A,:,i))
end
# Compute a random vector
x=sprand(n,k/n)
# Sampling
b=A*x
# Recovery
l1=linprog(ones(n),A,'=',b,0.0,Inf,ClpSolver())
# l1=linprog(ones(n),A,'=',b,0.0,Inf,GurobiSolver())
scatter([x l1.sol])
Out[23]:
In [24]:
A
Out[24]:
In [25]:
# Symmetric Bernoulli distribution
# for n=50:50:500, m=20:10:200, k=10:10:100
n=200
m=50
k=15
# The matrix of (-1,1)s
A=2*(rand(Bernoulli(0.5),m,n).-0.5)
for i=1:size(A,2)
normalize!(view(A,:,i))
end
# Compute a random vector
x=sprand(n,k/n)
# Sampling
b=A*x
# Recovery
l1=linprog(ones(n),A,'=',b,0.0,Inf,ClpSolver())
scatter([x l1.sol])
Out[25]:
In [26]:
nnz(x)
Out[26]:
In [27]:
A
Out[27]:
For the Fourier transform and the Fourier matrix we use the package FFTW.jl.
In [28]:
using FFTW
In [29]:
varinfo(FFTW)
Out[29]:
In [30]:
# Fourier matrix
# for n=50:50:500, m=20:10:200, k=10:10:100
n=200
m=50
k=15
# Elegant way of computing the Fourier matrix
F=fft(Matrix(I,n,n),1)
# Select m/2 random rows
ind=Random.randperm(n)[1:round(Int,m/2)]
Fm=F[ind,:]
# We need to work with real matrices due to linprog()
A=[real(Fm); imag(Fm)]
for i=1:size(A,2)
normalize!(view(A,:,i))
end
# Compute a random vector
x=sprand(n,k/n)
# Sampling
b=A*x
# Recovery
l1=linprog(ones(n),A,'=',b,0.0,Inf,ClpSolver())
scatter([x l1.sol])
Out[30]:
In the presence of noise in the observations, we want to recover a vector $x$ from $b=Ax + z$, where $z$ is an unknown stochastic or deterministic error term.
The hard thresholding operator $H_k(x)$ keeps the $k$ largest-magnitude entries of $x$ and sets all other entries to zero.
The problem can be formulated as the $l_1$ minimization problem $$ \textrm{minimize}\ \|x\|_1 \quad \textrm{subject to} \quad \|Ax-b\|_2^2\leq\epsilon, $$ where $\epsilon$ bounds the amount of noise in the data.
Assume that $A$ satisfies RIP of order $2k$ with $\delta_{2k}< \sqrt{2}-1$. Then the solution $x^{\star}$ of the above problem satisfies $$ \|x^{\star}-x\|_2 \leq C_0 \displaystyle \frac{1}{\sqrt{k}}\|x-H_k(x)\|_1 +C_1\epsilon, $$ where $x$ is the original signal.
The $l_1$ problem is a convex programming problem and can be efficiently solved. The solution methods are beyond the scope of this course.
If $k$ is known in advance, $A$ satisfies the RIP with $\delta_{3k}<1/15$, and $\|A\|_2<1$, then the $k$-sparse approximation of $x$ can be computed by the Iterative Hard Thresholding algorithm $$ x_{i+1}=H_k\big(x_i + A^T(b-Ax_i)\big), \quad x_0=0. $$
We construct a $k$-sparse $x$, form $b$, add noise, and recover $x$ with the Iterative Hard Thresholding algorithm from the fact above. The conditions on $A$ are rather restrictive, which means that $k$ must be rather small compared to $n$ and $m$ must be rather large. Instead of testing for convergence, we limit the number of iterations to $50m$.
In [42]:
n=300
# k is small compared to n
k=8
x=10*sprand(n,k/n)
# Reset k
k=nnz(x)
# Define m, rather large
m=5*round(Int,k*log(n/k))
# m=100
# Sampling matrix - normal distribution
A=rand(Normal(0,1/m),m,n)
# A=A/(norm(A)+1)
for i=1:size(A,2)
normalize!(view(A,:,i))
end
# Form b
b=A*x
# Add noise
noise=rand(m)*1e-5
b.+=noise
Out[42]:
In [43]:
m
Out[43]:
In [44]:
A
Out[44]:
In [45]:
norm(A),k,m
Out[45]:
In [46]:
x
Out[46]:
In [47]:
# Hard thresholding operator
function H(x::Vector,k::Int)
    # Keep the k absolutely largest entries of x, set the others to zero
    y=copy(x)
    ind=sortperm(abs.(y),rev=true)
    y[ind[k+1:end]].=0
    y
end
Out[47]:
In [48]:
function IHT(A::Matrix, b::Vector,k::Int)
    # Iterative Hard Thresholding: x ← H_k(x + A'*(b-A*x)), starting from x=0
    m=size(A,1)
    x=zeros(size(A,2))
    for i=1:50*m
        x=H(x+A'*(b-A*x),k)
    end
    x
end
Out[48]:
In [49]:
y=IHT(A,b,k)
norm(A*x-b)/norm(b)
Out[49]:
In [50]:
println(x)
In [51]:
println(sparse(y))
In [52]:
scatter([x y])
Out[52]:
Let us try linear programming in the case of noisy observations.
In [53]:
# Try with noise
# Normal distribution
# for n=50:50:500, m=20:10:200, k=10:10:100
n=300
# k is small compared to n
k=8
m=5*round(Int,k*log(n/k))
A=rand(Normal(0,1/m),m,n)
for i=1:size(A,2)
normalize!(view(A,:,i))
end
# Compute a random vector
x=1*sprand(n,k/n)
# Sampling with noise
b=A*x
# Add noise
noise=(rand(m).-0.5)*1e-3
b.+=noise
# Recovery
l1=linprog(ones(n),A,'=',b,0.0,Inf,ClpSolver())
# method=:Simplex or method=:InteriorPoint
#l1=linprog(ones(n),A,'=',b,0.0,Inf,
# GLPKSolverLP(presolve=true,method=:Simplex))
Out[53]:
In [54]:
scatter([x l1.sol])
Out[54]:
In [55]:
x
Out[55]:
In [56]:
sparse(l1.sol)
Out[56]:
In [57]:
# But
l1.sol[31]
Out[57]:
The wavelet transform of an image is essentially sparse, since only a small number of coefficients is significant. This fact can be used for compression.
Wavelet transforms are implemented in the package Wavelets.jl.
The tif version of the image has 65_798 bytes, the png version has 58_837 bytes, and the jpeg version has 26_214 bytes.
In [58]:
# import Pkg; Pkg.add("Wavelets")
# import Pkg; Pkg.add("TestImages")
In [59]:
using Wavelets
using Images
using TestImages
In [60]:
varinfo(TestImages)
Out[60]:
In [61]:
println(TestImages.remotefiles)
In [62]:
img=testimage("lena_gray_256.tif")
Out[62]:
In [63]:
typeof(img)
Out[63]:
In [64]:
size(img)
Out[64]:
In [65]:
show(img[1,1])
In [66]:
# Convert the image to 32 or 64 bit floats
x=map(Float32,img)
" Number of matrix elements = ",prod(size(x)),
" Bytes = ",sizeof(x), size(x)
Out[66]:
In [67]:
?modwt
Out[67]:
In [83]:
# Compute the wavelet transform of x
# or wavelet(WT.db3)
wlet=wavelet(WT.Coiflet{4}(), WT.Filter, WT.Periodic)
# wlet=wavelet(WT.haar)
xₜ=dwt(x,wlet,2);
In [84]:
colorview(Gray,real(xₜ))
Out[84]:
We now keep only a fraction $\tau$ of the absolutely largest coefficients (the code uses $\tau=0.05$, that is, 5%), set the rest to zero, and reconstruct the image. The images are very similar, which illustrates that the wavelet transform of an image is essentially sparse.
In [85]:
ind=sortperm(abs.(vec(xₜ)),rev=true)
# 0.1 = 10%, try also 0.05 = 5%
τ=0.05
k=round(Int,τ*length(ind))
xsparse=copy(xₜ)
xsparse[ind[k+1:end]].=0;
In [86]:
using SparseArrays
nnz(sparse(xsparse))
Out[86]:
In [87]:
colorview(Gray,xsparse)
Out[87]:
In [89]:
# Inverse wavelet transform of the sparse data
imgsparse=idwt(xsparse,wlet,2)
# Original and sparse image
display(img)
display(colorview(Gray,imgsparse))
There are $k=3277$ nonzero coefficients in the sparse wavelet representation.
Is there a sensing matrix which can be used to sample and recover xsparse?
Algorithms used in practice are more elaborate. For more details see J. Romberg, Imaging via Compressive Sampling, and Duarte et al., Single-Pixel Imaging via Compressive Sampling.
In [90]:
using SparseArrays
nnz(sparse(xsparse))
Out[90]:
In [91]:
# Columnwise number of non-zeros
nz=[nnz(sparse(xsparse[:,i])) for i=1:size(x,2)]
display(nz)
maximum(nz)
Out[91]:
In [92]:
n=size(xsparse,2)
m=150
A=rand(Normal(0,1/m),m,n)
for i=1:size(A,2)
normalize!(view(A,:,i))
end
# Sampling (columnwise)
b=A*xsparse
Out[92]:
In [93]:
# Recovery
xrecover=similar(xsparse)
for l=1:size(xsparse,2)
l1=linprog(ones(n),A,'=',b[:,l],ClpSolver())
xrecover[:,l]=l1.sol
end
In [94]:
# Mutual incoherence
maximum(abs.(A'*A-I))
Out[94]:
In [95]:
length(b),sizeof(b),size(b)
Out[95]:
In [96]:
sizeof(A)
Out[96]:
In [98]:
imgrecover=idwt(xrecover, wlet, 2)
# imgrecover=idct(xrecover)
# Original and recovered image
display(img)
display(colorview(Gray,imgrecover))
Let us also test the method on a simpler image.
In [99]:
img=load("./files/RDuarte.png")
Out[99]:
In [100]:
size(img)
Out[100]:
In [101]:
typeof(img)
Out[101]:
In [103]:
x=map(Gray,img)
x=map(Float32,x)
xₜ=dwt(x,wlet,2)
Out[103]:
In [104]:
sort(abs.(xₜ[:]),rev=true)
Out[104]:
In [105]:
ind=sortperm(abs.(vec(xₜ)),rev=true)
# 0.1 = 10%, try also 0.05 = 5%
τ=0.05
k=round(Int,τ*length(ind))
xsparse=copy(xₜ)
# xsparse.*=abs.(xsparse).>=0.3
xsparse[ind[k+1:end]].=0
# Inverse wavelet transform of the sparse data
imgsparse=idwt(xsparse,wlet,2)
# Original and sparse image
display(img)
display(colorview(Gray,imgsparse))
" Number of non-zero elements = ",nnz(sparse(xsparse))
Out[105]:
In [106]:
# Columnwise number of non-zeros
nz=[nnz(sparse(xsparse[:,i])) for i=1:size(x,2)]
display(nz)
maximum(nz)
Out[106]:
In [122]:
# Sampling
n=size(xsparse,2)
m=100
A=rand(Normal(0,1/m),m,n)
for i=1:size(A,2)
normalize!(view(A,:,i))
end
# Sampling (columnwise)
b=A*xsparse
Out[122]:
In [123]:
# Recovery
xrecover=similar(xsparse)
for l=1:size(xsparse,2)
l1=linprog(ones(n),A,'=',b[:,l],0,Inf,
ClpSolver())
xrecover[:,l]=l1.sol
end
In [124]:
A
Out[124]:
In [125]:
xsparse
Out[125]:
In [126]:
xrecover
Out[126]:
In [127]:
imgrecover=idwt(xrecover,wlet,2)
# Original and recovered image
display(img)
display(colorview(Gray,imgrecover))